12 research outputs found

    Clinical Knowledge Platform (CKP): a collaborative ecosystem to share interoperable clinical forms, viewers, and order sets with various EMRs

    Get PDF
    A large number of Electronic Medical Records (EMRs) are currently available, with a variety of features and architectures. Existing studies and frameworks have proposed solutions to the problem of specifying and applying clinical guidelines, with the goal of automating their use at the point of care. However, they do not yet fully support the dynamic use of medical knowledge in EMRs according to clinical context, nor the local application of international recommendations. This study presents the development of the Clinical Knowledge Platform (CKP): a collaborative, interoperable environment to create, use, and share sets of information elements that we call Clinical Use Contexts (CUCs). A CUC can include medical forms, patient dashboards, and order sets that are usable in various EMRs. For this purpose, we identified and developed three basic requirements: an interoperable, inter-mapped dictionary of concepts based on standard terminologies; the ability to define relevant clinical contexts; and an interface for collaborative content production by communities of professionals. Community members work together to create and/or modify CUCs based on different clinical contexts. These CUCs are then uploaded for use in clinical applications in various EMRs. With this method, each CUC is specific to a clinical context on the one hand, and can be adapted to local practice conditions and constraints on the other. Once a CUC has been developed, it can be shared with other potential users, who can consume it directly or modify it according to their needs
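A CUC, as described above, is essentially a context-keyed bundle of interoperable building blocks. The following is a minimal sketch of such a data model; all class and field names are hypothetical illustrations, not the CKP's actual schema (the LOINC code shown is the standard code for systolic blood pressure):

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    """A dictionary concept, inter-mapped to standard terminology codes."""
    label: str
    codes: dict = field(default_factory=dict)  # e.g. {"LOINC": "8480-6"}

@dataclass
class ClinicalUseContext:
    """A shareable set of information elements bound to a clinical context."""
    context: str                                     # the clinical context
    forms: list = field(default_factory=list)        # medical forms
    dashboards: list = field(default_factory=list)   # patient viewers
    order_sets: list = field(default_factory=list)   # orderable items

# Build one CUC for a clinical context, reusing an interoperable concept.
bp = Concept("Systolic blood pressure", {"LOINC": "8480-6"})
cuc = ClinicalUseContext(context="hypertension follow-up",
                         forms=[bp],
                         order_sets=["basic metabolic panel"])
print(cuc.context)
```

Because each concept carries its terminology mappings, a consuming EMR can resolve the same CUC against its own local dictionary, which is what makes sharing across heterogeneous systems plausible.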

    Exploring Novel Funding Strategies for Innovative Medical Research: The HORAO Crowdfunding Campaign.

    Get PDF
    BACKGROUND The rise of the internet and social media has boosted online crowdfunding as a novel strategy to raise funds for kick-starting projects, but it is rarely used in science. OBJECTIVE We report on an online crowdfunding campaign launched in the context of the neuroscience project HORAO. The aim of HORAO was to develop a noninvasive real-time method to visualize neuronal fiber tracts during brain surgery in order to better delineate tumors and to identify crucial cerebral landmarks. The revenue from the crowdfunding campaign was to be used to sponsor a crowdsourcing campaign for the HORAO project. METHODS We ran a 7-week reward-based crowdfunding campaign on a national crowdfunding platform, offering optional material and experiential rewards in return for a contribution toward raising our target of Swiss francs (CHF) 50,000 in financial support (roughly equivalent to US $50,000 at the time of the campaign). We used various owned media (websites and social media), as well as earned media (press releases and news articles) to raise awareness about our project. RESULTS The production of an explanatory video took 60 hours, and 31 posts were published on social media (Facebook, Instagram, and Twitter). The campaign raised a total of CHF 69,109. Approximately half of all donations came from donors who forwent a reward (CHF 28,786, 48.74%); the other half came from donors who chose experiential and material rewards in similar proportions (CHF 14,958, 25.33% and CHF 15,315.69, 25.93%, respectively). Of those with an identifiable relationship to the crowdfunding team, patients and their relatives contributed the largest sum (CHF 17,820, 30.17%), followed by friends and family (CHF 9288, 15.73%) and work colleagues (CHF 6028, 10.21%), while 43.89% of funds came from donors who were either anonymous or had an unknown relationship to the crowdfunding team. Patients and their relatives made the largest donations, with a median value of CHF 200 (IQR 90). 
CONCLUSIONS Crowdfunding proved to be a successful strategy to fund a neuroscience project and to raise awareness of a specific clinical problem. Focusing on potential donors with a personal interest in the issue, such as patients and their relatives in our project, is likely to increase funding success. Compared with traditional grant applications, new skills are needed to explain medical challenges to the crowd through video messages and social media
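The reported percentages are internally consistent if read against the sum of the three categorized amounts (CHF 59,059.69) rather than the full CHF 69,109 raised. A quick check, reproducing the abstract's own figures:

```python
# Reproduce the reported donation shares. The percentages in the abstract
# use the sum of the three reward categories (CHF 59,059.69) as the
# denominator, not the full CHF 69,109 raised.
no_reward, experiential, material = 28_786, 14_958, 15_315.69
denominator = no_reward + experiential + material  # 59,059.69

share = lambda x: round(100 * x / denominator, 2)
print(share(no_reward))     # 48.74, as reported
print(share(experiential))  # 25.33
print(share(material))      # 25.93
```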

    Brain SegNet: 3D local refinement network for brain lesion segmentation.

    Get PDF
    Accurate segmentation of brain lesions from magnetic resonance images (MRIs) is important for improving cancer diagnosis, surgical planning, and outcome prediction. However, manual, accurate segmentation of brain lesions from 3D MRIs is highly expensive, time-consuming, and prone to user bias. We present an efficient yet conceptually simple brain segmentation network (referred to as Brain SegNet), a 3D residual framework for automatic voxel-wise segmentation of brain lesions. Our model directly predicts dense voxel-wise segmentation of brain tumor or ischemic stroke regions in 3D brain MRIs. The proposed 3D segmentation network runs at about 0.5 s per MRI, roughly 50 times faster than previous approaches (Med Image Anal 43:98-111, 2018; Med Image Anal 36:61-78, 2017). Our model is evaluated on the BRATS 2015 benchmark for brain tumor segmentation, where it obtains state-of-the-art results, surpassing the recently published results reported in Med Image Anal 43:98-111, 2018 and Med Image Anal 36:61-78, 2017. We further applied the proposed Brain SegNet to ischemic stroke lesion outcome prediction, with strong results achieved on the Ischemic Stroke Lesion Segmentation (ISLES) 2017 database
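Segmentation quality on benchmarks such as BRATS is conventionally reported with the Dice coefficient. A minimal NumPy sketch of that metric (the evaluation measure only, not the Brain SegNet architecture itself):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary 3D lesion masks:
    2*|pred & truth| / (|pred| + |truth|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 3D volumes: a predicted mask partially overlapping a cubic lesion.
truth = np.zeros((8, 8, 8), dtype=bool); truth[2:6, 2:6, 2:6] = True  # 64 voxels
pred  = np.zeros((8, 8, 8), dtype=bool); pred[3:7, 2:6, 2:6] = True   # 64 voxels
print(round(dice_score(pred, truth), 3))  # 0.75: overlap is 48 of 64+64 voxels
```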

    Volumetric Food Quantification Using Computer Vision on a Depth-Sensing Smartphone: Preclinical Study.

    Get PDF
    BACKGROUND Quantification of dietary intake is key to the prevention and management of numerous metabolic disorders. Conventional approaches are challenging, laborious, and lack accuracy. The recent advent of depth-sensing smartphones in conjunction with computer vision could facilitate reliable quantification of food intake. OBJECTIVE The objective of this study was to evaluate the accuracy of a novel smartphone app combining depth-sensing hardware with computer vision to quantify meal macronutrient content using volumetry. METHODS The app ran on a smartphone with a built-in depth sensor applying structured light (iPhone X). The app estimated weight, macronutrient (carbohydrate, protein, fat), and energy content of 48 randomly chosen meals (breakfasts, cooked meals, snacks) encompassing 128 food items. The reference weight was generated by weighing individual food items using a precision scale. The study endpoints were (1) error of estimated meal weight, (2) error of estimated meal macronutrient content and energy content, (3) segmentation performance, and (4) processing time. RESULTS In both absolute and relative terms, the mean (SD) absolute errors of the app's estimates were 35.1 g (42.8 g; relative absolute error: 14.0% [12.2%]) for weight; 5.5 g (5.1 g; relative absolute error: 14.8% [10.9%]) for carbohydrate content; 1.3 g (1.7 g; relative absolute error: 12.3% [12.8%]) for fat content; 2.4 g (5.6 g; relative absolute error: 13.0% [13.8%]) for protein content; and 41.2 kcal (42.5 kcal; relative absolute error: 12.7% [10.8%]) for energy content. Although estimation accuracy was not affected by the viewing angle, the type of meal mattered, with slightly worse performance for cooked meals than for breakfasts and snacks. Segmentation adjustment was required for 7 of the 128 items. Mean (SD) processing time across all meals was 22.9 seconds (8.6 seconds). 
CONCLUSIONS This study evaluated the accuracy of a novel smartphone app with an integrated depth-sensing camera and found highly accurate volume estimation across a broad range of food items. In addition, the system demonstrated high segmentation performance and low processing time, highlighting its usability
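The volumetric principle behind such an app can be sketched in a few lines: integrate food height (empty-plate depth minus food depth) over the segmented pixels, then convert volume to weight with a per-food density. All constants below (pixel footprint, density) are assumed illustrative values, not the app's actual parameters:

```python
import numpy as np

# Illustrative sketch of depth-based food volumetry (not the app's actual
# pipeline): integrate food height over segmented pixels, then convert
# volume to weight with an assumed density for the recognized food class.
PIXEL_AREA_CM2 = 0.05 ** 2        # assumed: each pixel covers 0.5 mm x 0.5 mm
RICE_DENSITY_G_PER_CM3 = 0.85     # assumed density for cooked rice

# Toy data: a depth map of the plate (cm from camera) and an
# empty-plate reference captured at the same pose.
plate_depth = np.full((100, 100), 30.0)
food_depth = plate_depth.copy()
food_depth[20:80, 20:80] -= 2.0   # food rises 2 cm above the plate

mask = (plate_depth - food_depth) > 0.1          # segmentation: food pixels
height_cm = (plate_depth - food_depth)[mask]     # per-pixel food height
volume_cm3 = height_cm.sum() * PIXEL_AREA_CM2
weight_g = volume_cm3 * RICE_DENSITY_G_PER_CM3
print(round(volume_cm3, 1), round(weight_g, 1))  # 18.0 cm^3 -> 15.3 g
```

Macronutrient content then follows by multiplying the weight estimate with a per-100 g nutrient table for the recognized food class.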

    Crowdfunding for Innovative Medical Research: the HORAO Crowdfunding Campaign

    No full text
    Background: The rise of the internet and of social media has boosted online crowdfunding as a novel strategy to raise funds for kick-starting projects, but as yet it is rarely used in science. Objective: We report on an online crowdfunding campaign launched in the context of the neuroscience project HORAO. HORAO’s aim is to develop a non-invasive real-time method to visualize neuronal fiber tracts during brain surgery in order to better delineate tumors and to identify crucial cerebral landmarks. The revenue from the crowdfunding campaign was to be used to sponsor a crowdsourcing campaign for the HORAO project. Methods: We ran a 7-week reward-based crowdfunding campaign on a national crowdfunding platform, offering optional material and experiential rewards in return for a contribution toward raising our target of CHF 50,000 in financial support (Swiss francs; roughly equivalent to 50,000 United States dollars at the time of the campaign). We used various owned media (websites and social media) as well as earned media (press releases and news articles) to raise awareness about our project. Results: The production of an explanatory video took 60 hours, and 31 posts were published on social media (Facebook, Instagram, and Twitter). The campaign raised a total of CHF 69,109. Approximately half of all donations came from donors who forwent a reward (49%); the other half came from donors who chose experiential and material rewards in similar proportions (26% and 25%, respectively). Of those with an identifiable relationship to the crowdfunding team, patients and their relatives contributed the largest sum (30%), followed by friends and family (16%) and work colleagues (10%), while 44% of funds came from donors who were either anonymous or had an unknown relationship to the crowdfunding team. Patients and their relatives made the largest donations, with a median value of CHF 200 (interquartile range [IQR] = 90). 
Conclusions: Crowdfunding proved to be a successful strategy to fund a neuroscience project and to raise awareness of a specific clinical problem. Focusing on potential donors with a personal interest in the issue, such as patients and their relatives in our project, is likely to increase funding success. Compared to traditional grant applications, new skills are needed to explain medical challenges to the crowd through video messages and social media

    On the interpretability of artificial intelligence in radiology: challenges and opportunities

    No full text
    As artificial intelligence (AI) systems begin to make their way into clinical radiology practice, it is crucial to ensure that they function correctly and that they gain the trust of experts. Toward this goal, approaches to make AI "interpretable" have gained attention as a way to enhance understanding of a machine learning algorithm, despite its complexity. This article aims to provide insights into the current state of the art of interpretability methods for radiology AI. The review discusses radiologists' opinions on the topic and suggests trends and challenges that need to be addressed to effectively streamline interpretability methods in clinical practice. Supplemental material is available for this article. © RSNA, 2020. See also the commentary by Gastounioti and Kontos in this issue. Funding: National Institutes of Health (NIH) grant 1Z01 CL040004
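One interpretability method commonly discussed in this literature is occlusion sensitivity: slide a blanking patch across the input and record how much the model's score drops, so that large drops mark salient regions. A model-agnostic sketch, with a toy scoring function standing in for a real classifier:

```python
import numpy as np

def occlusion_map(image, score_fn, patch=4):
    """Model-agnostic occlusion sensitivity: blank each patch and record
    how much the model's score drops. Large drops mark salient regions."""
    base = score_fn(image)
    heat = np.zeros_like(image, dtype=float)
    for i in range(0, image.shape[0], patch):
        for j in range(0, image.shape[1], patch):
            occluded = image.copy()
            occluded[i:i+patch, j:j+patch] = 0.0
            heat[i:i+patch, j:j+patch] = base - score_fn(occluded)
    return heat

# Toy "classifier": responds only to intensity in the image center.
def score_fn(img):
    return img[8:16, 8:16].sum()

img = np.ones((24, 24))
heat = occlusion_map(img, score_fn)
print(heat[10, 10] > 0, heat[0, 0] == 0)  # True True: only the center matters
```

Because the method only needs repeated forward passes, it applies to any black-box model, which is part of why occlusion-style maps appear so often in clinical AI studies.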

    Machine Learning–Based Prediction Models for Different Clinical Risks in Different Hospitals: Evaluation of Live Performance

    No full text
    Background: Machine learning algorithms are currently used in a wide array of clinical domains to produce models that can predict clinical risk events. Most models are developed and evaluated with retrospective data, very few are evaluated in a clinical workflow, and even fewer report performance in different hospitals. In this study, we provide detailed evaluations of clinical risk prediction models in live clinical workflows for three different use cases in three different hospitals. Objective: The main objective of this study was to evaluate clinical risk prediction models in live clinical workflows and compare their performance in these settings with their performance on retrospective data. We also aimed to generalize the results by applying our investigation to three different use cases in three different hospitals. Methods: We trained clinical risk prediction models for three use cases (ie, delirium, sepsis, and acute kidney injury) in three different hospitals with retrospective data. We used machine learning and, specifically, deep learning to train models based on the Transformer architecture. The models were trained using a calibration tool that is common to all hospitals and use cases. The models shared a common design but were calibrated using each hospital's specific data. The models were deployed in these three hospitals and used in daily clinical practice. The predictions made by these models were logged and correlated with the diagnoses at discharge. We compared their performance with evaluations on retrospective data and conducted cross-hospital evaluations. Results: The performance of the prediction models with data from live clinical workflows was similar to their performance with retrospective data. The average area under the receiver operating characteristic curve (AUROC) decreased slightly, by 0.6 percentage points (from 94.8% to 94.2% at discharge).
The cross-hospital evaluations exhibited severely reduced performance: the average AUROC decreased by 8 percentage points (from 94.2% to 86.3% at discharge), which indicates the importance of calibrating the model with data from the deployment hospital. Conclusions: Calibrating the prediction models with data from each deployment hospital led to good performance in live settings. The performance degradation in the cross-hospital evaluation revealed the limitations of developing a generic model for different hospitals. Designing a generic development process that generates a specialized prediction model for each hospital helps ensure model performance across hospitals
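The AUROC figures above can be made concrete with a minimal implementation: AUROC equals the probability that a randomly chosen positive case is scored above a randomly chosen negative one, with ties counted as one half. A sketch with toy risk scores:

```python
import numpy as np

def auroc(labels, scores):
    """AUROC as the probability that a random positive case is scored
    above a random negative one (ties count 0.5) - the quantity behind
    the 94.2% vs 86.3% comparison reported in the study."""
    labels, scores = np.asarray(labels), np.asarray(scores)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Toy risk scores: the model ranks most positive cases above negatives.
y = [1, 1, 1, 0, 0, 0]
risk = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
print(round(auroc(y, risk), 3))  # 0.889: 8 of 9 positive-negative pairs ranked correctly
```

This pairwise form is O(n²) and meant only to show the definition; production evaluations would use an optimized routine such as scikit-learn's `roc_auc_score`.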